Efficient reinforcement learning through Evolutionary Acquisition of Neural Topologies

Authors

  • Yohannes Kassahun
  • Gerald Sommer
Abstract

In this paper we present a novel method, called Evolutionary Acquisition of Neural Topologies (EANT), of evolving the structure and weights of neural networks. The method introduces an efficient and compact genetic encoding of a neural network onto a linear genome that enables one to evaluate the network without decoding it. The method explores new structures whenever it is not possible to further exploit the structures found so far. This enables it to find minimal neural structures for solving a given learning task. We tested the algorithm on a benchmark control task and found it to perform very well.
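The claim that the linear genome can be evaluated without first decoding it into a network graph can be pictured as a single stack-based pass over the gene string, much like evaluating an expression in reverse Polish notation. The sketch below only illustrates that idea under simplifying assumptions (two hypothetical gene types, a tanh activation, no jumper or recurrent genes); it is not the authors' implementation.

```python
import math

# Hypothetical gene types for a linear genome. A neuron gene records how many
# inputs it consumes; an input gene records which sensory input it reads and
# the connection weight. Names and fields are illustrative only.
class Neuron:
    def __init__(self, n_inputs, weight=1.0):
        self.n_inputs = n_inputs   # how many stack entries this neuron consumes
        self.weight = weight       # weight of the neuron's outgoing connection

class Input:
    def __init__(self, index, weight):
        self.index = index         # position in the sensory input vector
        self.weight = weight       # connection weight

def evaluate(genome, x):
    """Evaluate a linear genome on input vector x without decoding it:
    scan right to left and use a stack, as in reverse Polish notation."""
    stack = []
    for gene in reversed(genome):
        if isinstance(gene, Input):
            stack.append(gene.weight * x[gene.index])
        else:  # Neuron gene: pop its inputs, apply the activation, push the result
            total = sum(stack.pop() for _ in range(gene.n_inputs))
            stack.append(gene.weight * math.tanh(total))
    return stack  # outputs of the root neuron(s) remain on the stack

# Usage: one output neuron fed by two weighted sensory inputs.
genome = [Neuron(2), Input(0, 0.5), Input(1, -0.8)]
print(evaluate(genome, [1.0, 2.0]))
```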


Similar articles

Automatic Neural Robot Controller Design using Evolutionary Acquisition of Neural Topologies

In this paper we present an automatic design of neural controllers for robots using a method called Evolutionary Acquisition of Neural Topologies (EANT). The method evolves both the structure and weights of neural networks. It starts with networks of minimal structures determined by the domain expert and increases their complexity along the evolution path. It introduces an efficient and compact...


Self-Organisation of Neural Topologies by Evolutionary Reinforcement Learning

In this article we present EANT, “Evolutionary Acquisition of Neural Topologies”, a method that creates neural networks (NNs) by evolutionary reinforcement learning. The structure of NNs is developed using mutation operators, starting from a minimal structure. Their parameters are optimised using CMA-ES. EANT can create NNs that are very specialised; they achieve a very good performance while b...


Evolutionary reinforcement learning of artificial neural networks

In this article we describe EANT2, Evolutionary Acquisition of Neural Topologies, Version 2, a method that creates neural networks by evolutionary reinforcement learning. The structure of the networks is developed using mutation operators, starting from a minimal structure. Their parameters are optimised using CMA-ES, Covariance Matrix Adaptation Evolution Strategy, a derandomised variant of ev...
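The division of labour described in this abstract, growing the structure by mutation while tuning the weights of each candidate structure with CMA-ES, can be outlined roughly as follows. The sketch assumes the pycma package (`cma`) plus two user-supplied placeholders, `evaluate_network(structure, weights)` returning a fitness to maximise and `mutate_structure(structure, weights)` returning a grown copy; none of this is the EANT2 code itself.

```python
import cma

def optimise_weights(structure, x0, evaluate_network, sigma=0.5, iterations=50):
    """Inner loop: tune the weights of one fixed structure with CMA-ES."""
    es = cma.CMAEvolutionStrategy(list(x0), sigma)
    best_w, best_fit = list(x0), evaluate_network(structure, x0)
    for _ in range(iterations):
        if es.stop():
            break
        candidates = es.ask()                          # sample weight vectors
        fitnesses = [evaluate_network(structure, w) for w in candidates]
        es.tell(candidates, [-f for f in fitnesses])   # CMA-ES minimises
        i = max(range(len(candidates)), key=lambda k: fitnesses[k])
        if fitnesses[i] > best_fit:
            best_w, best_fit = list(candidates[i]), fitnesses[i]
    return best_w, best_fit

def evolve(initial_structure, initial_weights, evaluate_network,
           mutate_structure, generations=10, survivors=3):
    """Outer loop: start from a minimal structure and grow it by mutation."""
    population = [(initial_structure, list(initial_weights))]
    for _ in range(generations):
        # structural exploration: each survivor proposes one grown offspring
        population += [mutate_structure(s, w) for s, w in population]
        # exploitation: optimise each candidate's weights, keep the best few
        scored = []
        for structure, weights in population:
            w, fit = optimise_weights(structure, weights, evaluate_network)
            scored.append((fit, structure, w))
        scored.sort(key=lambda t: t[0], reverse=True)
        population = [(s, w) for _, s, w in scored[:survivors]]
    return population
```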


Analysis of an evolutionary reinforcement learning method in a multiagent domain

Many multiagent problems comprise subtasks which can be considered as reinforcement learning (RL) problems. In addition to classical temporal difference methods, evolutionary algorithms are among the most promising approaches for such RL problems. The relative performance of these approaches in certain subdomains (e.g. multiagent learning) of the general RL problem remains an open question at ...


Evolving Neural Networks through Augmenting Topologies

An important question in neuroevolution is how to gain an advantage from evolving neural network topologies along with weights. We present a method, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task. We claim that the increased efficiency is due to (1) employing a principled method of crossover...





Publication date: 2005